
Support batched preprocessing#101

Merged
tastelikefeet merged 3 commits into modelscope:main from tastelikefeet:feat/batched_preprocess
Mar 5, 2026

Conversation

@tastelikefeet
Collaborator

PR type

  • Bug Fix
  • New Feature
  • Document Updates
  • More Models or Datasets Support

PR information

Write the detailed information belonging to this PR.

Experiment results

Paste your experiment results here (if needed).

(cherry picked from commit 36d9731c178f092f1e766c2f995985c27fbac9f4)
@gemini-code-assist
Contributor

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers get up to speed quickly.

This pull request significantly enhances the data preprocessing pipeline by introducing comprehensive support for batched processing. This architectural change allows preprocessors to operate on multiple data samples concurrently, which is expected to yield substantial improvements in efficiency and performance, particularly for large-scale data operations. The update involves a fundamental redesign of the Preprocessor interface and its underlying mechanisms to accommodate this new, more optimized processing paradigm.

Highlights

  • Preprocessor Interface Refactoring: The core Preprocessor class and its implementations have been refactored to support batched processing. The __call__ method now accepts and returns collections of samples, with single-sample logic moved to a new preprocess method.
  • Batching Utility Methods: New static methods, map_col_to_row and map_row_to_col, were added to the base Preprocessor to facilitate conversion between column-oriented and row-oriented data formats, essential for handling batches.
  • Dataset Map Function Update: The dataset.map function now enables batched processing by default, removing a previous temporary restriction and allowing preprocessors to operate on multiple samples simultaneously.
  • Documentation Updates: Both English and Chinese documentation for the Preprocessor component have been updated to reflect the new batched input/output interface and usage patterns.
  • Specific Preprocessor Adaptations: Several existing preprocessor implementations, including MathPreprocessor, LatexOCRProcessor, and various LLM-related processors, have been updated to conform to the new batched processing interface.
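The refactored interface described in these highlights can be sketched roughly as follows. The method names (`map_col_to_row`, `map_row_to_col`, `preprocess`, `__call__`) come from the PR summary, but the bodies below are illustrative assumptions, not the actual implementation in `src/twinkle/preprocessor/base.py`:

```python
# Sketch of the batched Preprocessor interface (illustrative, not the PR's code).
from typing import Any, Dict, List


class Preprocessor:
    @staticmethod
    def map_col_to_row(batch: Dict[str, List[Any]]) -> List[Dict[str, Any]]:
        """Convert a column-oriented batch ({col: [v0, v1, ...]}) into rows."""
        keys = list(batch.keys())
        n = len(batch[keys[0]]) if keys else 0
        return [{k: batch[k][i] for k in keys} for i in range(n)]

    @staticmethod
    def map_row_to_col(rows: List[Dict[str, Any]]) -> Dict[str, List[Any]]:
        """Convert row-oriented samples back into a column-oriented batch."""
        if not rows:
            return {}
        return {k: [row[k] for row in rows] for k in rows[0]}

    def preprocess(self, row: Dict[str, Any]) -> Dict[str, Any]:
        """Single-sample logic; subclasses override this."""
        return row

    def __call__(self, batch: Dict[str, List[Any]]) -> Dict[str, List[Any]]:
        """Batched entry point: columns -> rows -> preprocess -> columns."""
        rows = self.map_col_to_row(batch)
        return self.map_row_to_col([self.preprocess(r) for r in rows])
```

This shape matches how `datasets.Dataset.map(batched=True)` hands data to its callable: a dict of column lists in, a dict of column lists out.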


Changelog
  • cookbook/client/tinker/custom_service/short_math_grpo.py
    • Refactored MathPreprocessor to use a new preprocess method for single samples and updated the __call__ method to handle batches of rows, converting between column and row formats.
  • cookbook/client/tinker/modelscope_service/short_math_grpo.py
    • Refactored MathPreprocessor to use a new preprocess method for single samples and updated the __call__ method to handle batches of rows, converting between column and row formats.
  • cookbook/mm/fsdp2.py
    • Refactored LatexOCRProcessor to use a new preprocess method for single samples and updated the __call__ method to handle batches of rows, converting between column and row formats.
  • docs/source_en/Components/Preprocessor and Filter/Preprocessor.md
    • Updated the Preprocessor class signature in the documentation to reflect the new batched input/output format (List[Dict] to List[Trajectory]).
    • Clarified that the __call__ method now expects a list of samples and returns a list of trajectories.
  • docs/source_zh/组件/预处理器和过滤器/Preprocessor.md
    • Updated the Preprocessor class signature in the Chinese documentation to reflect the new batched input/output format (List[Dict] to List[Trajectory]).
    • Clarified that the __call__ method now expects a list of samples and returns a list of trajectories, and that input/output counts may differ.
  • src/twinkle/dataset/base.py
    • Changed the batched argument in the map function from False to True by default, enabling batched processing for datasets.
    • The filter function's batched argument remains False.
  • src/twinkle/preprocessor/base.py
    • Added map_col_to_row and map_row_to_col static methods to convert between column-oriented and row-oriented data representations.
    • Modified the __call__ method signature to accept and return dictionaries of lists (batched data).
  • src/twinkle/preprocessor/llm.py
    • Refactored CompetitionMathProcessor, CompetitionMathGRPOProcessor, SelfCognitionProcessor, AlpacaProcessor, CountdownProcessor, and GSM8KProcessor to support batched processing.
    • Each now has a preprocess method for single samples and an updated __call__ method that handles batch conversion.
  • tests/preprocessor/test_preprocessor.py
    • Updated the ModifiedProcessor test class to align with the new batched Preprocessor interface, specifically by introducing a preprocess method and modifying the __call__ method to handle batches.
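The per-file refactoring pattern repeated throughout this changelog might look like the subclass below. `MathPreprocessor` is named in the PR, but the field names (`problem`, `messages`) and the prompt logic are hypothetical stand-ins:

```python
# Hypothetical subclass showing the refactoring pattern: single-sample logic
# lives in `preprocess`; `__call__` handles the column/row batch conversion.
from typing import Any, Dict, List


class MathPreprocessor:
    def preprocess(self, row: Dict[str, Any]) -> Dict[str, Any]:
        # Single-sample logic: wrap the problem text in a chat-style message.
        # The "problem" and "messages" field names are illustrative only.
        return {"messages": [{"role": "user", "content": row["problem"]}]}

    def __call__(self, batch: Dict[str, List[Any]]) -> Dict[str, List[Any]]:
        keys = list(batch.keys())
        n = len(batch[keys[0]]) if keys else 0
        rows = [{k: batch[k][i] for k in keys} for i in range(n)]  # col -> row
        out = [self.preprocess(r) for r in rows]
        if not out:
            return {}
        return {k: [r[k] for r in out] for k in out[0]}  # row -> col
```

With `dataset.map` now defaulting to `batched=True`, a processor written this way receives a dict of column lists and must return one, which is exactly what the `__call__` wrapper guarantees.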
Activity
  • No specific activity (comments, reviews, progress updates) was provided in the context for this pull request.


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces batched preprocessing capabilities by refactoring the __call__ method in Preprocessor and its subclasses, adding map_col_to_row and map_row_to_col static methods for efficient data handling. However, the implementation of these batching helper methods in Preprocessor is fragile and can lead to crashes or data loss when encountering inconsistent data, posing a potential Denial of Service risk. It is recommended to improve the robustness of these methods to handle empty inputs and varying row structures.
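A possible hardening along the lines the review suggests, tolerating empty batches and ragged rows instead of raising, might look like this. This is a sketch of one defensive approach, not the fix actually applied in the PR:

```python
# Defensive variants of the batching helpers (illustrative assumptions).
from typing import Any, Dict, List


def map_col_to_row(batch: Dict[str, List[Any]]) -> List[Dict[str, Any]]:
    if not batch:
        return []  # empty batch: no rows, rather than an IndexError
    n = max(len(v) for v in batch.values())
    # Pad short columns with None instead of crashing on ragged input.
    return [{k: (v[i] if i < len(v) else None) for k, v in batch.items()}
            for i in range(n)]


def map_row_to_col(rows: List[Dict[str, Any]]) -> Dict[str, List[Any]]:
    if not rows:
        return {}
    # Take the union of keys so rows with extra or missing fields
    # don't silently drop data (missing values become None).
    keys = {k for row in rows for k in row}
    return {k: [row.get(k) for row in rows] for k in keys}
```

Whether padding with `None` is preferable to failing fast depends on the pipeline; an alternative is to validate column lengths up front and raise a descriptive error.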

@tastelikefeet tastelikefeet merged commit 76a20a3 into modelscope:main Mar 5, 2026
1 of 3 checks passed